    Guaranteed Conformance of Neurosymbolic Models to Natural Constraints

    Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. This is particularly useful in modeling medical systems, where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model conforms to established knowledge from the natural sciences. Such knowledge is often available, or can be distilled into a (possibly black-box) model M; for instance, the unicycle model for an F1 racing car. In this light, we consider the following problem: given a model M and a state-transition dataset, we wish to best approximate the system model while remaining within a bounded distance of M. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples called memories, using the idea of a growing neural gas. Next, using these memories, we partition the state space into disjoint subsets and compute bounds that the neural network must respect when its input is drawn from a particular subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this leads to only a bounded increase in approximation error, which can be controlled by increasing the number of memories. We experimentally show that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to the specified models M (each encoding various constraints) with order-of-magnitude improvements compared to augmented Lagrangian and vanilla training methods.
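
    The wrapper described above lends itself to a short illustration: cluster the data into memories, let the nearest memory decide which subset an input falls in, and clip the network's prediction to that subset's bound around M's prediction. The sketch below is only an assumption-laden illustration of that idea, with a toy unicycle model standing in for M; names such as ConformantWrapper and unicycle_step are hypothetical, not the paper's API.

```python
# Minimal sketch of a "symbolic wrapper" for conformance (illustrative only).
import numpy as np

def unicycle_step(state, dt=0.1):
    """Toy prior model M: unicycle dynamics over state [x, y, theta, v]."""
    x, y, theta, v = state
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta,
                     v])

class ConformantWrapper:
    def __init__(self, memories, bounds):
        self.memories = np.asarray(memories)  # representative states ("memories")
        self.bounds = np.asarray(bounds)      # per-subset allowed deviation from M

    def predict(self, net_prediction, state):
        # Partition: the nearest memory determines which subset `state` is in.
        k = int(np.argmin(np.linalg.norm(self.memories - state, axis=1)))
        # Clamp the data-driven prediction to stay within the subset's bound of M.
        m_pred = unicycle_step(state)
        low, high = m_pred - self.bounds[k], m_pred + self.bounds[k]
        return np.clip(net_prediction, low, high)
```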

    Memory-Consistent Neural Networks for Imitation Learning

    Imitation learning considerably simplifies policy synthesis compared to alternative approaches by exploiting access to expert demonstrations. For such imitation policies, errors away from the training samples are particularly critical. Even rare slip-ups in the policy action outputs can compound quickly over time, since they lead to unfamiliar future states where the policy is still more likely to err, eventually causing task failures. We revisit simple supervised "behavior cloning" for conveniently training the policy from nothing more than pre-recorded demonstrations, but carefully design the model class to counter the compounding-error phenomenon. Our "memory-consistent neural network" (MCNN) outputs are hard-constrained to stay within clearly specified permissible regions anchored to prototypical "memory" training samples. We provide a guaranteed upper bound for the sub-optimality gap induced by MCNN policies. Using MCNNs on 9 imitation learning tasks, with MLP, Transformer, and Diffusion backbones, spanning dexterous robotic manipulation and driving, proprioceptive and visual inputs, and varying sizes and types of demonstration data, we find large and consistent gains in performance, validating that MCNNs are better suited than vanilla deep neural networks for imitation learning applications. Website: https://sites.google.com/view/mcnn-imitation
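
    As a rough picture of the hard constraint described above, the sketch below assumes the permissible region is a ball around the nearest memory's expert action, with a radius that grows with the distance to that memory (so the policy is pinned to the expert near the data and freer away from it). MemoryConsistentPolicy and growth_rate are illustrative names, not the published MCNN code.

```python
# Minimal sketch of a memory-anchored output constraint (illustrative only).
import numpy as np

class MemoryConsistentPolicy:
    def __init__(self, mem_obs, mem_actions, growth_rate=1.0):
        self.mem_obs = np.asarray(mem_obs)          # memory observations
        self.mem_actions = np.asarray(mem_actions)  # expert actions at those memories
        self.growth_rate = growth_rate              # how fast the region grows

    def act(self, raw_action, obs):
        # Locate the nearest memory and the query's distance to it.
        dists = np.linalg.norm(self.mem_obs - obs, axis=1)
        k = int(np.argmin(dists))
        radius = self.growth_rate * dists[k]        # permissible region size
        anchor = self.mem_actions[k]
        # Hard-constrain the raw network action to the ball around the anchor.
        delta = raw_action - anchor
        norm = np.linalg.norm(delta)
        if norm > radius and norm > 0.0:
            delta *= radius / norm
        return anchor + delta
```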

    Exploring with Sticky Mittens: Reinforcement Learning with Expert Interventions via Option Templates

    Long-horizon robot learning tasks with sparse rewards pose a significant challenge for current reinforcement learning algorithms. A key feature enabling humans to learn challenging control tasks is that they often receive expert intervention, which lets them understand the high-level structure of the task before mastering low-level control actions. We propose a framework for leveraging expert intervention to solve long-horizon reinforcement learning tasks. We consider option templates, which are specifications encoding a potential option that can be trained using reinforcement learning. We formulate expert intervention as allowing the agent to execute option templates before learning an implementation, which lets the agent use an option before committing costly resources to learning it. We evaluate our approach on three challenging reinforcement learning problems, showing that it outperforms state-of-the-art approaches by two orders of magnitude.
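
    One way to read the option-template idea is as an interface with initiation and termination conditions plus an expert-provided executor that stands in until a learned implementation exists. The sketch below uses hypothetical names (OptionTemplate, expert_execute) and is only an interpretation of the abstract, not the authors' framework.

```python
# Minimal sketch of an option template with an expert-intervention stub.
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class OptionTemplate:
    name: str
    can_start: Callable[[Any], bool]       # initiation condition on the state
    is_done: Callable[[Any], bool]         # termination condition on the state
    expert_execute: Callable[[Any], Any]   # expert intervention: state -> next state
    learned_policy: Optional[Callable[[Any], Any]] = None  # filled in later by RL

    def execute(self, state):
        """Use the learned implementation if one exists, else the expert stub."""
        if self.learned_policy is not None:
            return self.learned_policy(state)
        return self.expert_execute(state)
```
    A high-level agent can chain such templates to complete the long-horizon task from the start, and each template's expert stub is later replaced by a policy trained with reinforcement learning.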

    Distributionally Robust Statistical Verification with Imprecise Neural Networks

    A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems. Verification approaches centered around reachability analysis fail to scale, and purely statistical approaches are constrained by the distributional assumptions about the sampling process. Instead, we pose a distributionally robust version of the statistical verification problem for black-box systems, where our performance guarantees hold over a large family of distributions. This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification. A central piece of our approach is an ensemble technique called Imprecise Neural Networks, which provides the uncertainty that guides the active learning. The active learning uses Sherlock, an exhaustive neural-network verification tool, to collect samples. An evaluation on multiple physical simulators in the OpenAI Gym MuJoCo environments with reinforcement-learned controllers demonstrates that our approach can provide useful and scalable guarantees for high-dimensional systems.
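
    As a rough illustration of how an ensemble can produce the interval-valued ("imprecise") uncertainty that drives active learning, the sketch below uses random linear models as stand-ins for trained networks and queries the candidate state with the widest prediction interval. It does not reflect the paper's actual architecture or its use of Sherlock.

```python
# Minimal sketch of interval-valued ensemble uncertainty for active learning.
import numpy as np

rng = np.random.default_rng(0)
members = [rng.normal(size=4) for _ in range(5)]   # stand-ins for trained networks

def imprecise_predict(x):
    """Return the lower/upper bound over the ensemble members' predictions at x."""
    preds = np.array([w @ x for w in members])
    return preds.min(), preds.max()

def most_uncertain(candidates):
    """Pick the candidate with the widest prediction interval to query next."""
    widths = [hi - lo for lo, hi in (imprecise_predict(x) for x in candidates)]
    return candidates[int(np.argmax(widths))]

candidates = [rng.normal(size=4) for _ in range(100)]
query_state = most_uncertain(candidates)   # state to hand to the simulator/verifier
```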

    Creating Statistically Anisotropic and Inhomogeneous Perturbations

    In almost all structure formation models, primordial perturbations are created within a homogeneous and isotropic universe, like the one we observe. Because their ensemble averages inherit the symmetries of the spacetime in which they are seeded, cosmological perturbations then happen to be statistically isotropic and homogeneous. Certain anomalies in the cosmic microwave background, on the other hand, suggest that perturbations do not satisfy these statistical properties, thereby perhaps challenging our understanding of structure formation. In this article we ease this tension. We show that if the universe contains an appropriate triad of scalar fields with spatially constant but non-zero gradients, it is possible to generate statistically anisotropic and inhomogeneous primordial perturbations, even though the energy-momentum tensor of the triad itself is invariant under translations and rotations.
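
    Concretely, "spatially constant but non-zero gradients" points to a triad configuration of the following form; the normalization \lambda is only an illustrative constant, not a quantity taken from the article.

```latex
% Illustrative triad configuration with spatially constant, non-zero gradients
\phi^a(t,\vec{x}) = \lambda \, \delta^a_{\,i} \, x^i ,
\qquad
\partial_i \phi^a = \lambda \, \delta^a_{\,i} = \mathrm{const} \neq 0 ,
\qquad a = 1, 2, 3 .
```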

    DeepSearch: A Simple and Effective Blackbox Attack for Deep Neural Networks

    Although deep neural networks have been very successful in image-classification tasks, they are prone to adversarial attacks. A wide variety of techniques for generating adversarial inputs has emerged, including blackbox and whitebox attacks on neural networks. In this paper, we present DeepSearch, a novel fuzzing-based, query-efficient, blackbox attack for image classifiers. Despite its simplicity, DeepSearch is shown to be more effective in finding adversarial inputs than state-of-the-art blackbox approaches. DeepSearch is additionally able to generate the most subtle adversarial inputs in comparison to these approaches.
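
    For orientation, the sketch below shows a generic query-limited blackbox attack loop of the kind this line of work builds on: perturb within an L-infinity ball and keep changes that lower the classifier's score for the true class. It is not the DeepSearch algorithm itself; model_scores is an assumed blackbox that maps an image to per-class scores.

```python
# Generic blackbox hill-climbing attack within an L-infinity ball (illustrative only).
import numpy as np

def blackbox_attack(model_scores, image, true_label, eps=0.03, max_queries=1000, seed=0):
    rng = np.random.default_rng(seed)
    adv = image.copy()
    best = model_scores(adv)[true_label]
    for _ in range(max_queries):
        # Propose a small signed perturbation and project back into the eps-ball.
        candidate = adv + 0.1 * eps * rng.choice([-1.0, 1.0], size=image.shape)
        candidate = np.clip(candidate, image - eps, image + eps)
        candidate = np.clip(candidate, 0.0, 1.0)
        scores = model_scores(candidate)
        if scores[true_label] < best:        # keep changes that hurt the true class
            adv, best = candidate, scores[true_label]
            if int(np.argmax(scores)) != true_label:
                return adv                   # misclassified: adversarial input found
    return adv
```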

    Trajectory Tracking Control for Robotic Vehicles Using Counterexample Guided Training of Neural Networks

    We investigate approaches to training neural networks that control vehicles to follow a fixed reference trajectory robustly, while respecting limits on their velocities and accelerations. Here, robustness means that if a vehicle starts inside a fixed region around the reference trajectory, it remains within this region while moving along the reference from an initial set to a target set. We combine two ideas in this paper: (a) demonstrations of the correct control obtained from a model-predictive controller (MPC), and (b) falsification approaches that actively search for violations of the property, given a current candidate. Our approach thus creates an initial training set using the MPC loop and builds a first candidate neural network controller. This controller is repeatedly analyzed with falsification, which searches for counterexample trajectories, and the resulting counterexamples are used to create new training examples. The process proceeds iteratively until the falsifier no longer succeeds within a given computational budget. We propose falsification approaches that combine random sampling and gradient descent to systematically search for violations. We evaluate the combined approach on a variety of benchmarks that involve controlling dynamical models of cars and quadrotor aircraft.
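
    The training loop described above can be summarized in a few lines: bootstrap from MPC demonstrations, train a candidate controller, falsify it, and retrain on any counterexamples found. The sketch below keeps only random-sampling falsification (the gradient-based search is omitted) and relies on hypothetical callables (mpc_control, train_network, violates_property, sample_initial_state); it paraphrases the loop rather than reproducing the authors' implementation.

```python
# Minimal counterexample-guided training loop (illustrative only).
import numpy as np

def cegis_train(mpc_control, train_network, violates_property,
                sample_initial_state, rounds=10, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    # (a) Initial demonstrations of correct control from the MPC loop.
    data = [(s, mpc_control(s))
            for s in (sample_initial_state(rng) for _ in range(n_samples))]
    controller = train_network(data)
    for _ in range(rounds):
        # (b) Falsification: sample initial states and keep those whose
        # closed-loop trajectory leaves the fixed region around the reference.
        counterexamples = [s for s in (sample_initial_state(rng) for _ in range(n_samples))
                           if violates_property(controller, s)]
        if not counterexamples:
            break                                  # falsifier found nothing in budget
        # Label counterexamples with MPC and retrain on the enlarged set.
        data += [(s, mpc_control(s)) for s in counterexamples]
        controller = train_network(data)
    return controller
```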

    Retracted: When a giant ovarian cyst poses a diagnostic dilemma

    The article "When a giant ovarian cyst poses a diagnostic dilemma" is retracted by the Editor-in-Chief at the request of the corresponding author and co-authors. The corresponding author informed the journal that the patient described in the article, although she had willingly consented to her clinical data being published, later withdrew that consent after learning of the publication when she came for a follow-up visit.